# cacache
is a Node.js library for managing local key and content address caches. It's really fast, really good at concurrency, and it will never give you corrupted data, even if cache files get corrupted or manipulated.

It was originally written to be used as npm's local cache, but can just as easily be used on its own.
## Install

```sh
$ npm install --save cacache
```
## Example

```javascript
const cacache = require('cacache')
const fs = require('fs')

const tarball = '/path/to/mytar.tgz'
const cachePath = '/tmp/my-toy-cache'
const key = 'my-unique-key-1234'

// Cache some data under `key`, using `cachePath` as the root of the cache.
cacache.put(cachePath, key, '10293801983029384').then(digest => {
  console.log(`Saved content to ${cachePath}.`)
})

const destination = '/tmp/mytar.tgz'

// Copy the cached content out to its destination, looked up by key.
cacache.get.stream(
  cachePath, key
).pipe(
  fs.createWriteStream(destination)
).on('finish', () => {
  console.log('done extracting!')
})

// The same thing, but bypassing the key index and fetching by content
// address. `tarballSha512` is a digest returned by an earlier `put`.
cacache.get.byDigest(cachePath, tarballSha512).then(data => {
  fs.writeFile(destination, data, err => {
    console.log('tarball data fetched based on its sha512sum and written out!')
  })
})
```
## Features
- Extraction by key or by content address (shasum, etc)
- Multi-hash support - safely host sha1, sha512, etc, in a single cache
- Automatic content deduplication
- Fault tolerance (immune to corruption, partial writes, etc)
- Consistency guarantees on read and write (full data verification)
- Lockless, high-concurrency cache access
- Streaming support
- Promise support
- Pretty darn fast
- Arbitrary metadata storage
- Garbage collection and additional offline verification
## Contributing
The cacache team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The Contributor Guide has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear.
## API
#### `> cacache.ls(cache) -> Promise`

Lists info for all entries currently in the cache as a single large object. Each entry in the object will be keyed by the unique index key, with corresponding `get.info` objects as the values.
##### Example

```javascript
cacache.ls(cachePath).then(console.log)
// Output:
{
  'my-thing': {
    key: 'my-thing',
    digest: 'deadbeef',
    hashAlgorithm: 'sha512',
    path: '.testcache/content/deadbeef',
    time: 12345698490,
    metadata: {
      name: 'blah',
      version: '1.2.3',
      description: 'this was once a package but now it is my-thing'
    }
  },
  'other-thing': {
    key: 'other-thing',
    digest: 'bada55',
    hashAlgorithm: 'whirlpool',
    path: '.testcache/content/bada55',
    time: 11992309289
  }
}
```
#### `> cacache.ls.stream(cache) -> Readable`

Lists info for all entries currently in the cache as a stream of entries. This works just like `ls`, except `get.info` entries are returned as `'data'` events on the returned stream.
##### Example

```javascript
cacache.ls.stream(cachePath).on('data', console.log)
// Output:
{
  key: 'my-thing',
  digest: 'deadbeef',
  hashAlgorithm: 'sha512',
  path: '.testcache/content/deadbeef',
  time: 12345698490,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}
{
  key: 'other-thing',
  digest: 'bada55',
  hashAlgorithm: 'whirlpool',
  path: '.testcache/content/bada55',
  time: 11992309289
}
{
  ...
}
```
#### `> cacache.get(cache, key, [opts]) -> Promise({data, metadata, digest, hashAlgorithm})`

Returns an object with the cached data, digest, and metadata identified by `key`. The `data` property of this object will be a `Buffer` instance that presumably holds some data that means something to you. I'm sure you know what to do with it! cacache just won't care. `hashAlgorithm` is the algorithm used to calculate the `digest` of the content. This algorithm must be used if you fetch later with `get.byDigest`.

If there is no content identified by `key`, or if the locally-stored data does not pass the validity checksum, the promise will be rejected.

A sub-function, `get.byDigest`, may be used for identical behavior, except lookup will happen by content digest, bypassing the index entirely. This version of the function only returns `data` itself, without any wrapper.

##### Note

This function loads the entire cache entry into memory before returning it. If you're dealing with Very Large data, consider using `get.stream` instead.
##### Example

```javascript
// Look up by key.
cacache.get(cachePath, 'my-thing').then(console.log)
// Output:
{
  metadata: {
    thingName: 'my'
  },
  digest: 'deadbeef',
  hashAlgorithm: 'sha512',
  data: Buffer#<deadbeef>
}

// Look up by content address.
cacache.get.byDigest(cachePath, 'deadbeef', {
  hashAlgorithm: 'sha512'
}).then(console.log)
// Output:
Buffer#<deadbeef>
```
#### `> cacache.get.stream(cache, key, [opts]) -> Readable`

Returns a Readable Stream of the cached data identified by `key`.

If there is no content identified by `key`, or if the locally-stored data does not pass the validity checksum, an error will be emitted.

`metadata` and `digest` events will be emitted before the stream closes, if you need to collect that extra data about the cached entry.

A sub-function, `get.stream.byDigest`, may be used for identical behavior, except lookup will happen by content digest, bypassing the index entirely. This version does not emit the `metadata` and `digest` events at all.
##### Example

```javascript
// Stream by key, collecting entry info from events.
cacache.get.stream(
  cachePath, 'my-thing'
).on('metadata', metadata => {
  console.log('metadata:', metadata)
}).on('hashAlgorithm', algo => {
  console.log('hashAlgorithm:', algo)
}).on('digest', digest => {
  console.log('digest:', digest)
}).pipe(
  fs.createWriteStream('./x.tgz')
)
// Output:
// metadata: { ... }
// hashAlgorithm: 'sha512'
// digest: deadbeef

// Stream by content address, bypassing the index entirely.
cacache.get.stream.byDigest(
  cachePath, 'deadbeef', { hashAlgorithm: 'sha512' }
).pipe(
  fs.createWriteStream('./x.tgz')
)
```
#### `> cacache.get.info(cache, key) -> Promise`

Looks up `key` in the cache index, returning information about the entry if one exists.

##### Fields

- `key` - Key the entry was looked up under. Matches the `key` argument.
- `digest` - Content digest the entry refers to.
- `hashAlgorithm` - Hashing algorithm used to generate `digest`.
- `path` - Filesystem path relative to `cache` argument where content is stored.
- `time` - Timestamp the entry was first added on.
- `metadata` - User-assigned metadata associated with the entry/content.
##### Example

```javascript
cacache.get.info(cachePath, 'my-thing').then(console.log)
// Output:
{
  key: 'my-thing',
  digest: 'deadbeef',
  path: '.testcache/content/deadbeef',
  time: 12345698490,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}
```
#### `> cacache.put(cache, key, data, [opts]) -> Promise`

Inserts data passed to it into the cache. The returned Promise resolves with a digest (generated according to `opts.hashAlgorithm`) after the cache entry has been successfully written.
##### Example

```javascript
fetch(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).then(res => res.buffer()) // assuming a node-fetch-style response
.then(data => {
  return cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data)
}).then(digest => {
  console.log('digest is', digest)
})
```
#### `> cacache.put.stream(cache, key, [opts]) -> Writable`

Returns a Writable Stream that inserts data written to it into the cache. Emits a `digest` event with the digest of written contents when it succeeds.
##### Example

```javascript
request.get(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).pipe(
  cacache.put.stream(
    cachePath, 'registry.npmjs.org|cacache@1.0.0'
  ).on('digest', d => console.log(`digest is ${d}`))
)
```
#### `> cacache.put` options

`cacache.put` functions have a number of options in common.

##### `metadata`

Arbitrary metadata to be attached to the inserted key.

##### `size`

If provided, the data stream will be verified to check that enough data was passed through. If there's more or less data than expected, insertion will fail with an `EBADSIZE` error.

##### `digest`

If present, the pre-calculated digest for the inserted content. If this option is provided and does not match the post-insertion digest, insertion will fail with an `EBADCHECKSUM` error.

To control the hashing algorithm, use `opts.hashAlgorithm`.
##### `hashAlgorithm`

Default: `'sha512'`

Hashing algorithm to use when calculating the digest for inserted data. Can use any algorithm listed in `crypto.getHashes()` or `'omakase'`/`'お任せします'` to pick a random hash algorithm on each insertion. You may also use any anagram of `'modnar'` to use this feature.
##### `uid`/`gid`

If provided, cacache will do its best to make sure any new files added to the cache use this particular `uid`/`gid` combination. This can be used, for example, to drop permissions when someone uses `sudo`, but cacache makes no assumptions about your needs here.
##### `memoize`

Default: `null`

If provided, cacache will memoize the given cache insertion in memory, bypassing any filesystem checks for that key or digest in future cache fetches. Nothing will be written to the in-memory cache unless this option is explicitly truthy.

There is no facility for limiting memory usage short of `cacache.clearMemoized()`, so be mindful of the sort of data you ask to get memoized!

Reading from disk can be forced by explicitly passing `memoize: false` to the reader functions, but their default will be to read from memory.
#### `> cacache.rm.all(cache) -> Promise`

Clears the entire cache. Mainly by blowing away the cache directory itself.
##### Example

```javascript
cacache.rm.all(cachePath).then(() => {
  console.log('THE APOCALYPSE IS UPON US 😱')
})
```
#### `> cacache.rm.entry(cache, key) -> Promise`

Alias: `cacache.rm`

Removes the index entry for `key`. Content will still be accessible if requested directly by content address (`get.stream.byDigest`).
##### Example

```javascript
cacache.rm.entry(cachePath, 'my-thing').then(() => {
  console.log('I did not like it anyway')
})
```
#### `> cacache.rm.content(cache, digest) -> Promise`

Removes the content identified by `digest`. Any index entries referring to it will not be usable again until the content is re-added to the cache with an identical digest.
##### Example

```javascript
cacache.rm.content(cachePath, 'deadbeef').then(() => {
  console.log('data for my-thing is gone!')
})
```
#### `> cacache.clearMemoized()`

Completely resets the in-memory entry cache.
#### `> tmp.mkdir(cache, opts) -> Promise<Path>`

Returns a unique temporary directory inside the cache's `tmp` dir. This directory will use the same safe user assignment that all the other stuff uses.

Once the directory is made, it's the user's responsibility that all files within are made according to the same `opts.gid`/`opts.uid` settings that would be passed in. If not, you can ask cacache to do it for you by calling `tmp.fix()`, which will fix all tmp directory permissions.

If you want automatic cleanup of this directory, use `tmp.withTmp()`.

##### Example

```javascript
cacache.tmp.mkdir(cache).then(dir => {
  fs.writeFile(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
})
```
#### `> tmp.withTmp(cache, opts, cb) -> Promise`

Creates a temporary directory with `tmp.mkdir()` and calls `cb` with it. The created temporary directory will be removed when the return value of `cb()` resolves -- that is, if you return a Promise from `cb()`, the tmp directory will be automatically deleted once that promise completes.

The same caveats apply when it comes to managing permissions for the tmp dir's contents.

##### Example

```javascript
cacache.tmp.withTmp(cache, dir => {
  return fs.writeFileAsync(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
}).then(() => {
  // `dir` no longer exists
})
```
#### `> cacache.verify(cache, opts) -> Promise`

Checks out and fixes up your cache:

- Cleans up corrupted or invalid index entries.
- Custom entry filtering options.
- Garbage collects any content entries not referenced by the index.
- Checks digests for all content entries and removes invalid content.
- Fixes cache ownership.
- Removes the `tmp` directory in the cache and all its contents.

When it's done, it'll return an object with various stats about the verification process, including amount of storage reclaimed, number of valid entries, number of entries removed, etc.

##### Options

- `opts.uid` - uid to assign to cache and its contents.
- `opts.gid` - gid to assign to cache and its contents.
- `opts.filter` - receives a formatted entry. Return `false` to remove it. Note: might be called more than once on the same entry.
##### Example

```sh
echo somegarbage >> $CACHEPATH/content/deadbeef
```

```javascript
cacache.verify(cachePath).then(stats => {
  // deadbeef collected, because of invalid checksum.
  console.log('cache is much nicer now! stats:', stats)
})
```
#### `> cacache.verify.lastRun(cache) -> Promise`

Returns a `Date` representing the last time `cacache.verify` was run on `cache`.

##### Example

```javascript
cacache.verify(cachePath).then(() => {
  cacache.verify.lastRun(cachePath).then(lastTime => {
    console.log('cacache.verify was last called on ' + lastTime)
  })
})
```